Results 1 - 18 of 18
1.
Bioengineering (Basel) ; 10(5)2023 May 05.
Article in English | MEDLINE | ID: covidwho-20244850

ABSTRACT

The COVID-19 pandemic has posed unprecedented challenges to global healthcare systems, highlighting the need for accurate and timely risk prediction models that can prioritize patient care and allocate resources effectively. This study presents DeepCOVID-Fuse, a deep learning fusion model that predicts risk levels in patients with confirmed COVID-19 by combining chest radiographs (CXRs) and clinical variables. The study collected initial CXRs, clinical variables, and outcomes (i.e., mortality, intubation, hospital length of stay, and intensive care unit (ICU) admission) from February to April 2020, with risk levels determined by the outcomes. The fusion model was trained on 1657 patients (age: 58.30 ± 17.74; female: 807) and validated on 428 patients (56.41 ± 17.03; 190) from the local healthcare system, and tested on 439 patients (56.51 ± 17.78; 205) from a different holdout hospital. The performance of well-trained fusion models on full or partial modalities was compared using DeLong and McNemar tests. Results show that DeepCOVID-Fuse significantly (p < 0.05) outperformed models trained only on CXRs or clinical variables, with an accuracy of 0.658 and an area under the receiver operating characteristic curve (AUC) of 0.842. The fusion model achieved good outcome predictions even when only one of the modalities was used in testing, demonstrating its ability to learn better feature representations across modalities during training.
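For readers who want the shape of such a model, here is a minimal sketch of a two-branch image-plus-clinical fusion classifier; the ResNet-18 backbone, layer sizes, and three risk levels are illustrative assumptions, not the published DeepCOVID-Fuse architecture.

```python
# Minimal two-branch fusion sketch: CXR encoder + clinical-variable encoder,
# concatenated features feeding a shared risk-level head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionRiskModel(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 3):
        super().__init__()
        backbone = resnet18(weights=None)            # CXR feature extractor
        backbone.fc = nn.Identity()                  # expose 512-d features
        self.image_branch = backbone
        self.clinical_branch = nn.Sequential(        # clinical-variable encoder
            nn.Linear(n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 64, n_classes)   # fused risk-level head

    def forward(self, cxr, clinical):
        z = torch.cat([self.image_branch(cxr),
                       self.clinical_branch(clinical)], dim=1)
        return self.head(z)

model = FusionRiskModel(n_clinical=20)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20))
print(logits.shape)  # torch.Size([2, 3])
```

Training both branches jointly against the same labels is what lets the shared head learn cross-modal feature representations.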

2.
30th International Conference on Computers in Education Conference, ICCE 2022 ; 1:527-536, 2022.
Article in English | Scopus | ID: covidwho-2288026

ABSTRACT

We aim to gain insight into technology-enhanced literacy learning for kindergarten students during the COVID-19 pandemic by exploring a novice kindergarten teacher's practice of multiliteracies pedagogy in his virtual kindergarten classroom. This qualitative case study collected data from multiple sources, such as virtual interviews and classroom observations, the Kindergarten Program (KP) document, the teacher's reflective notes, lesson plans, students' artefacts, and the researchers' observational notes and reflective journals. The study found that although the novice kindergarten teacher provided various multimodal learning opportunities for students, his literacy practice emphasized phonological awareness, phonemic awareness, and letter-sound correspondence. He also faced numerous challenges due to inadequate teacher preparation and professional development, inconsistent quality and utility of technology, the constraints of virtual learning for young learners, varying degrees of parental support, and the difficulty of implementing multiliteracies pedagogy with young children virtually. This study contributes to the existing literature on online learning for kindergarten students and extends the burgeoning multiliteracies research from physical to virtual learning environments. It also demonstrates how virtual learning opens up opportunities to advance multiliteracies pedagogy and highlights the importance of strengthening teacher education programs and providing continuous professional development for teachers. © 30th International Conference on Computers in Education Conference, ICCE 2022 - Proceedings.

3.
30th International Conference on Computers in Education Conference, ICCE 2022 ; 2:173-177, 2022.
Article in English | Scopus | ID: covidwho-2286931

ABSTRACT

This study explores students' preferences for various online learning activities that leveraged digital learning tools. Quantitative and qualitative data were collected from 23 education major students who learned online during the COVID-19 pandemic. Students found the multimodal activities effective in keeping them focused and engaged and in helping them acquire new knowledge and skills at a deeper level. © ICCE 2022. All rights reserved.

4.
2022 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2022 ; 2022-October:409-414, 2022.
Article in English | Scopus | ID: covidwho-2152536

ABSTRACT

The threefold increase in SonyLiv viewers during the Tokyo Olympics, the 10% rise in YouTube users during the COVID-19 isolation period, and the 19% growth in Netflix's user count driven by the rapid expansion of OTT have made digital platforms more active and targeted than ever. The hourly growth in user interactions, and e-commerce platforms' desire to keep users engaged on their sites, are pushing researchers to make the digital web user-specific and revenue-oriented. This paper develops a deep learning-based approach for building a movie recommendation system with three main aspects: (a) using a knowledge graph to embed the text and meta information of movies; (b) using multi-modal information about movies (audio, visual frames, text summary, and metadata) to generate movie/user representations without directly using rating information, so that the multi-modal representation can help cope with the cold-start problem of recommendation systems; and (c) a graph attention network-based approach for developing the regression system. For meta encoding, we built a knowledge graph directly from the movies' meta information. For movie-summary embedding, we extracted nouns, verbs, and objects to build a knowledge graph with head-relation-tail relationships. A deep neural network as well as graph attention networks are used to measure performance in terms of RMSE. The proposed system is tested on an extended MovieLens-100K dataset with multi-modal information. Experimental results establish that rating-based embeddings alone outperform the state-of-the-art techniques in the current setup, but using multi-modal information in embedding generation performs better than its single-modal counterparts. © 2022 IEEE.
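As a rough illustration of aspect (b), the sketch below regresses a rating from concatenated audio, visual, text-summary, and metadata embeddings; the embedding dimensions and the plain MLP regressor are assumptions standing in for the paper's knowledge-graph and graph-attention components.

```python
# Rating regression from fused multi-modal movie embeddings: because the
# representation does not use ratings, it remains usable for cold-start items.
import torch
import torch.nn as nn

class MultiModalRatingRegressor(nn.Module):
    def __init__(self, d_audio=128, d_visual=256, d_text=300, d_meta=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_audio + d_visual + d_text + d_meta, 256),
            nn.ReLU(),
            nn.Linear(256, 1),   # predicted rating
        )

    def forward(self, audio, visual, text, meta):
        return self.mlp(torch.cat([audio, visual, text, meta], dim=-1))

model = MultiModalRatingRegressor()
pred = model(torch.randn(4, 128), torch.randn(4, 256),
             torch.randn(4, 300), torch.randn(4, 64))
rmse = torch.sqrt(nn.functional.mse_loss(pred.squeeze(-1), torch.rand(4) * 5))
print(rmse)
```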

5.
2022 IEEE International Conference on Electrical, Computer, and Energy Technologies, ICECET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2063247

ABSTRACT

Online exams became the standard approach adopted by universities and institutes during the COVID-19 pandemic, which forced the world toward distance learning and online assessment. This approach brings challenges, and online exam proctoring is considered one of the most difficult to solve, since the academic honesty and credibility of the online exam must be ensured. Existing proctoring techniques require a few proctors to observe a huge number of students to detect cheating, which is time-consuming and labor-intensive. We therefore implemented multiple modalities to detect student activity during the online exam using a webcam and to send the proctor a report on any suspected student. These modalities are head-pose estimation, object detection, and eye-gaze estimation. The proposed solution was tested and evaluated on 29 students across a total of four exam sessions to verify its effectiveness. The event-detection accuracy of the multi-modality experiment was 95.69%. © 2022 IEEE.
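A toy sketch of the aggregation step is shown below: per-frame signals from the three modalities are combined into a report of suspected events. The detector outputs and the yaw threshold are hypothetical placeholders for real head-pose, object-detection, and gaze models.

```python
# Rule-based aggregation of per-frame proctoring signals into flagged events.
from dataclasses import dataclass

@dataclass
class FrameSignals:
    head_yaw_deg: float      # from a head-pose estimator (placeholder)
    phone_detected: bool     # from an object detector (placeholder)
    gaze_off_screen: bool    # from an eye-gaze estimator (placeholder)

def flag_events(frames, yaw_limit=30.0):
    """Return (frame_index, reason) pairs for suspicious activity."""
    events = []
    for i, f in enumerate(frames):
        if abs(f.head_yaw_deg) > yaw_limit:
            events.append((i, "head turned away"))
        if f.phone_detected:
            events.append((i, "prohibited object"))
        if f.gaze_off_screen:
            events.append((i, "gaze off screen"))
    return events

report = flag_events([FrameSignals(5.0, False, False),
                      FrameSignals(42.0, True, False)])
print(report)  # [(1, 'head turned away'), (1, 'prohibited object')]
```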

6.
4th International Conference on Communications, Information System and Computer Engineering, CISCE 2022 ; : 605-608, 2022.
Article in English | Scopus | ID: covidwho-2018629

ABSTRACT

The pneumonia epidemic caused by the 2019 novel coronavirus (2019-nCoV) has affected people's lives in many aspects and has aroused widespread concern in global public opinion. To better grasp the real state of public opinion on the Internet and support epidemic prevention and public opinion analysis, this paper studies netizen sentiment analysis for epidemic-related topics in online communities and proposes a multimodal feature fusion solution. For the fusion of image and text modalities, Bi-LSTM and Bi-GRU are used to further learn the intrinsic correlation between modalities on top of bidirectional transformer feature fusion, and an image-based multi-scale feature fusion method is proposed that better addresses this task. Experiments show that the proposed method outperforms current mainstream multimodal sentiment analysis methods. © 2022 IEEE.
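The stacked recurrent fusion the abstract describes might look roughly like the following sketch, where time-aligned image and text feature sequences pass through a Bi-LSTM and then a Bi-GRU; the feature dimensions and the mean-pooling step are assumptions.

```python
# Bi-LSTM followed by Bi-GRU over concatenated image + text feature sequences.
import torch
import torch.nn as nn

class BiRNNFusion(nn.Module):
    def __init__(self, d_text=256, d_image=256, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(d_text + d_image, hidden, batch_first=True,
                            bidirectional=True)
        self.gru = nn.GRU(2 * hidden, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # sentiment classes

    def forward(self, text_seq, image_seq):
        x = torch.cat([text_seq, image_seq], dim=-1)   # align by time step
        x, _ = self.lstm(x)
        x, _ = self.gru(x)
        return self.head(x.mean(dim=1))                # pool over the sequence

logits = BiRNNFusion()(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
print(logits.shape)  # torch.Size([2, 3])
```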

7.
Bioengineering (Basel) ; 9(7)2022 Jul 13.
Article in English | MEDLINE | ID: covidwho-1938682

ABSTRACT

A comprehensive medical image-based diagnosis is usually performed across various image modalities before a final decision is reached; hence, designing a deep learning model that can use any medical image modality to diagnose a particular disease is of great interest. The available methods are multi-staged, with many computational bottlenecks in between. This paper presents an improved end-to-end method of multimodal image classification using deep learning models. We review top research methods developed over the years to improve models trained from scratch as well as transfer learning approaches. We show that, when fully trained, a model can first implicitly discriminate the imaging modality and then diagnose the relevant disease. Our models were applied to COVID-19 classification from chest X-ray, CT scan, and lung ultrasound image modalities. The model that achieved the highest accuracy correctly maps all input images to their respective modalities and then classifies the disease, achieving an overall accuracy of 91.07%.
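The "discriminate the modality, then diagnose" behaviour can be made explicit with a shared encoder and two heads, as in this sketch; the shallow CNN encoder is an assumption, not the authors' architecture.

```python
# Shared encoder with a modality head (X-ray / CT / LUS) and a disease head.
import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    def __init__(self, n_modalities=3, n_diseases=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.modality_head = nn.Linear(32, n_modalities)  # which modality?
        self.disease_head = nn.Linear(32, n_diseases)     # COVID-19 vs. other

    def forward(self, x):
        z = self.encoder(x)
        return self.modality_head(z), self.disease_head(z)

mod_logits, dis_logits = TwoHeadClassifier()(torch.randn(2, 1, 224, 224))
```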

8.
EAI/Springer Innovations in Communication and Computing ; : 107-118, 2023.
Article in English | Scopus | ID: covidwho-1919564

ABSTRACT

The importance of sustainable mobility has been confirmed this year by the global COVID-19 pandemic, which reduced the mobility of the population and thereby significantly reduced the level of dust, noise, and air pollution from traffic in most European countries, including the Slovak Republic. App-based and shared-ride services have become highly popular and offer a level of convenience previously unseen in urban mobility systems all over the world. Individual car transport dominates at the expense of sustainable modes of transport in most Slovak cities. The city of Nitra is no exception, as the high number of trips during peak hours often leads to severe traffic congestion. One way to improve the situation is multimodality, which allows urban residents to choose from a range of alternative travel options. The aim of this paper is to assess the possibilities of multimodality for short-distance trips in the city of Nitra and to analyse how changes in mode-choice variability affect urban mobility behaviour. To meet these objectives, marketing research was conducted. The findings show that the travel behaviour of Nitra's citizens does not exhibit the elements of sustainable urban mobility, as the current infrastructure and overall opportunities are limited. Our findings also point to significant differences in the attitudes of residents from different urban areas. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

9.
Computers, Materials and Continua ; 72(3):4357-4374, 2022.
Article in English | Scopus | ID: covidwho-1836518

ABSTRACT

The Coronavirus Disease 2019 (COVID-19) pandemic poses worldwide challenges surpassing the boundaries of country, religion, race, and economy. The current benchmark method for the detection of COVID-19 is reverse transcription polymerase chain reaction (RT-PCR) testing. Although this testing method is accurate enough for the diagnosis of COVID-19, it is time-consuming, expensive, expert-dependent, and violates social distancing. In this paper, we propose an effective multi-modality-based and feature-fusion-based (MMFF) COVID-19 detection technique using deep neural networks. For the multiple modalities, we used the cough, breath, and voice samples of healthy as well as COVID-19 patients from the publicly available COSWARA dataset. Several useful features were extracted from these modalities and fed as input to long short-term memory (LSTM) recurrent neural networks for classification. An extensive set of experimental analyses was performed to evaluate the performance of our proposed approach. The experimental results show that our approach outperformed four recently published baseline approaches. We believe the proposed technique will assist potential users in diagnosing COVID-19 in a minimal amount of time without the intervention of an expert. © 2022 Tech Science Press. All rights reserved.
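A minimal sketch of the classification stage: sequences of acoustic features (e.g., MFCCs extracted from the cough/breath/voice recordings) are fed to an LSTM whose final hidden state drives a binary head. The feature dimension and single-layer design are assumptions, not the paper's exact model.

```python
# LSTM classifier over sequences of per-frame acoustic features.
import torch
import torch.nn as nn

class AudioLSTMClassifier(nn.Module):
    def __init__(self, n_features=39, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # healthy vs. COVID-19

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # final hidden state -> logits

logits = AudioLSTMClassifier()(torch.randn(4, 100, 39))
print(logits.shape)  # torch.Size([4, 2])
```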

10.
2021 Ethics and Explainability for Responsible Data Science Conference, EE-RDS 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1741176

ABSTRACT

Since 2019, COVID-19 has been a major problem for the world's population. COVID-19 is known for its fast transmission and high infectivity; therefore, how to reduce the burden on the medical system has become a hot topic in current research. Previous researchers have used deep learning techniques to classify COVID-19 effectively. Although the results are remarkable, the input (X-ray images alone) is simple. Therefore, a new multi-modality fusion network is proposed in this paper, in which the spatial and structural feature information in the image is highlighted by means of heat maps. Experiments show the effectiveness of the proposed network. © 2021 IEEE.

11.
1st IEEE International Conference on Advanced Learning Technologies on Education and Research, ICALTER 2021 ; 2021.
Article in Spanish | Scopus | ID: covidwho-1730916

ABSTRACT

Centennial teenagers have complex multimodal literacy practices that need to be examined for incorporation and assessment in higher education. This article explores how students in a distance written communication course at a Peruvian university use multimodal literacy practices to express the complexity of their thematic interests: gender violence, technology and video games, and the COVID-19 pandemic. For this, we highlight how students use various semiotic resources in order to represent their critical perspective on their thematic choices and the challenges posed by digital writing in the construction of their identity. © 2021 IEEE.

12.
2021 IEEE International Conference on Big Data, Big Data 2021 ; : 5854-5858, 2021.
Article in English | Scopus | ID: covidwho-1730857

ABSTRACT

The coronavirus disease 2019 (COVID-19) is a highly transmissible infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Scientists, physicians, and health officials are seeking innovative approaches to understand the complex COVID-19 pandemic pathway and decrease its morbidity and mortality. Incorporating artificial intelligence and data science techniques across the health science domain could improve disease surveillance, intervention planning, and policymaking. In this paper, we report our effort to deploy multimodal big data analytics to improve pandemic surveillance and preparedness. A common challenge in conducting multimodal big data analytics in clinical and public health settings is the integration of multidimensional heterogeneous data sources. Additional challenges for developers are explaining the decisions and actions made by intelligent systems to human users, maintaining interpretability across different data sources, and preserving the privacy of health information. We present the Urban Population Health Observatory (UPHO), an explainable knowledge-based multimodal data analytics platform that facilitates COVID-19 surveillance by integrating a large volume of multimodal, multidimensional, heterogeneous data, including social determinants of health indicators and clinical and population health data. © 2021 IEEE.

13.
2021 IEEE International Conference on Big Data, Big Data 2021 ; : 4659-4666, 2021.
Article in English | Scopus | ID: covidwho-1730852

ABSTRACT

Diagnosis by a human expert examining a chest CT scan or X-ray image can be one of the most reliable methods for COVID-19, but it is difficult to scale given limited medical-care infrastructure and the rate at which the disease spreads. We train COVID-19 diagnosis models for classification using both image modalities, chest CT scan and X-ray datasets. We use a fusion approach for multimodal data fusion and propose two variants: the first model is trained using an automated deep learning approach, and in the second, features are extracted from the images using transfer learning, followed by fine-tuning of the model. The performance of these models is evaluated with metrics such as testing accuracy, recall, precision, and F1-score. False negatives are critical, so cost-sensitive learning is enforced to keep their number small. The cost-sensitive convnet model achieves an accuracy of 97%. © 2021 IEEE.
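Cost-sensitive learning of the kind described is commonly implemented by weighting the positive class in the loss so that false negatives are penalized more heavily; the 5:1 cost ratio below is an illustrative assumption.

```python
# Class-weighted cross-entropy: missing a COVID-19 case (class 1) costs 5x
# more than misclassifying a non-COVID case (class 0).
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))

logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)
loss.backward()   # gradients now reflect the asymmetric misclassification cost
```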

14.
IEEE Internet Computing ; 26(1):60-67, 2022.
Article in English | Scopus | ID: covidwho-1704110

ABSTRACT

The motivation of this work is to build a multimodal COVID-19 pandemic forecasting platform for a large-scale academic institution to minimize the impact of COVID-19 after resuming academic activities. The design of this multimodality work is steered by video, audio, and tweets. Before conducting COVID-19 prediction, we first trained diverse models, including traditional machine learning models (e.g., Naive Bayes, support vector machine, and TF-IDF) and deep learning models (e.g., long short-term memory (LSTM), MobileNetV2, and SSD), to extract meaningful information from video, audio, and tweets by 1) detecting and counting face masks, 2) detecting and counting coughs for potentially infected cases, and 3) conducting sentiment analysis on COVID-19-related tweets. Finally, we fed the multimodal analysis results, together with daily confirmed case data and social distancing metrics, into an LSTM model to predict the daily increase rate of confirmed cases for the next week. Important observations with supporting evidence are presented. © 1997-2012 IEEE.
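The final forecasting step might look like this sketch: an LSTM consumes a window of daily multimodal signals (mask counts, cough counts, tweet sentiment, confirmed cases, social-distancing metrics) and emits one predicted increase rate per day of the coming week. The five-feature input and the 30-day history are assumptions.

```python
# Sequence-to-vector LSTM forecaster: daily multimodal features in,
# next week's daily increase rates out.
import torch
import torch.nn as nn

class CaseRateForecaster(nn.Module):
    def __init__(self, n_features=5, hidden=32, horizon=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):              # x: (batch, days, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])        # one rate per day of the next week

pred = CaseRateForecaster()(torch.randn(1, 30, 5))  # 30 days of history
print(pred.shape)  # torch.Size([1, 7])
```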

15.
PeerJ Comput Sci ; 7: e688, 2021.
Article in English | MEDLINE | ID: covidwho-1471157

ABSTRACT

BACKGROUND: Rumor detection is a popular research topic in natural language processing and data mining. Since the outbreak of COVID-19, related rumors have been widely posted and spread on online social media, seriously affecting people's daily lives, the national economy, and social stability. It is both theoretically and practically essential to detect and refute COVID-19 rumors quickly and effectively. Because COVID-19 was an emergent event that broke out drastically, related rumor instances were scarce and distinctive at the early stage, which makes the detection task a typical few-shot learning problem. Traditional rumor detection techniques, however, focus on detecting existing events with enough training instances, and thus fail on emergent events such as COVID-19. Developing a new few-shot rumor detection framework has therefore become critical and urgent for preventing rumor outbreaks at early stages. METHODS: This article focuses on few-shot rumor detection, especially detecting COVID-19 rumors from Sina Weibo with only a minimal number of labeled instances. We contribute a Sina Weibo COVID-19 rumor dataset for few-shot rumor detection and propose a few-shot learning-based multi-modality fusion model. A full microblog consists of the source post and corresponding comments, which are treated as two modalities and fused with meta-learning methods. RESULTS: Experiments on the collected Weibo dataset and the public PHEME dataset show the significant improvement and generality of the proposed model.
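As a concrete (if simplified) stand-in for the meta-learning fusion, here is a prototypical-network-style few-shot classifier: class prototypes are the means of a handful of labeled support embeddings, and a query microblog (a fused post-plus-comments embedding) is assigned to the nearest prototype. The 128-dimensional fused embeddings are an assumption, and this is not the paper's model.

```python
# Prototypical-network-style few-shot classification over fused embeddings.
import torch

def prototypical_predict(support, support_labels, query, n_classes=2):
    # Prototype = mean of the few labeled support embeddings per class.
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])
    dists = torch.cdist(query, protos)   # Euclidean distance to each prototype
    return dists.argmin(dim=1)           # nearest prototype = predicted class

support = torch.randn(10, 128)           # 5 rumor + 5 non-rumor examples
labels = torch.tensor([0] * 5 + [1] * 5)
pred = prototypical_predict(support, labels, torch.randn(3, 128))
print(pred)  # class index per query microblog
```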

16.
Inf Process Manag ; 59(1): 102782, 2022 Jan.
Article in English | MEDLINE | ID: covidwho-1446740

ABSTRACT

In the early diagnosis of coronavirus disease (COVID-19), it is important both to distinguish severe cases from mild cases and to predict when mild cases might convert to severe ones. This study investigates both problems in a unified framework, addressing issues such as the slight appearance difference between mild and severe cases, interpretability, High Dimension and Low Sample Size (HDLSS) data, and class imbalance. The proposed framework includes three steps: (1) feature extraction, which first conducts hierarchical segmentation of the chest Computed Tomography (CT) image data and then extracts multi-modality handcrafted features for each segment, aiming to capture the slight appearance differences from different perspectives; (2) data augmentation, which employs over-sampling to augment the number of samples in the minority classes, addressing the class imbalance problem; and (3) joint construction of classification and regression via a novel Multi-task Multi-modality Support Vector Machine (MM-SVM), which handles the HDLSS data and achieves interpretability. Experimental analysis on two synthetic data sets and one real COVID-19 data set demonstrated that the proposed framework outperformed six state-of-the-art methods in both binary classification and regression performance.
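Steps (2) and (3) can be approximated with standard components, as sketched below: SMOTE over-sampling for the class imbalance followed by an SVM on the handcrafted CT features. This stands in for, and is much simpler than, the paper's multi-task MM-SVM.

```python
# Over-sample the minority (severe) class with SMOTE, then fit a linear SVM.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                  # handcrafted CT features
y = np.array([0] * 90 + [1] * 10)               # 1 = severe (minority class)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # classes now balanced
clf = SVC(kernel="linear").fit(X_bal, y_bal)
print(clf.predict(X[:5]))
```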

17.
Hum Factors ; : 18720821990162, 2021 Feb 01.
Article in English | MEDLINE | ID: covidwho-1058151

ABSTRACT

OBJECTIVE: We review the effects of COVID-19 on the human sense of smell (olfaction) and discuss implications for human-system interactions. We emphasize how critical smell is and how the widespread loss of smell due to COVID-19 will impact human-system interaction. BACKGROUND: COVID-19 reduces the sense of smell in people who contract the disease. Thus far, olfaction has received relatively little attention from human factors/ergonomics professionals. While smell is not a primary means of human-system communication, humans rely on smell in many important ways related to both quality of life and safety. METHOD: We briefly review and synthesize the rapidly expanding literature through September 2020 on the topic of smell loss caused by COVID-19. We interpret findings in terms of their relevance to human factors/ergonomics researchers and practitioners. RESULTS: Since March 2020 dozens of articles have been published that report smell loss in COVID-19 patients. The prevalence and duration of COVID-19-related smell loss is still under investigation, but the available data suggest that it may leave many people with long-term deficits and distortions in sense of smell. CONCLUSION: We suggest that the human factors/ergonomics community could become more aware of the importance of the sense of smell and focus on accommodating the increasing number of people with reduced olfactory performance. APPLICATION: We present examples of how olfaction can augment human-system communication and how human factors/ergonomics professionals might accommodate people with olfactory dysfunction. While seemingly at odds, both of these goals can be achieved.

18.
Med Image Anal ; 69: 101975, 2021 04.
Article in English | MEDLINE | ID: covidwho-1039485

ABSTRACT

The outbreak of COVID-19 around the world has put great pressure on the health care system, and many efforts have been devoted to artificial intelligence (AI)-based analysis of CT and chest X-ray images to help alleviate the shortage of radiologists and improve diagnostic efficiency. However, only a few works focus on AI-based lung ultrasound (LUS) analysis despite its significant role in COVID-19. In this work, we propose a novel method for severity assessment of COVID-19 patients from LUS and clinical information. Great challenges exist regarding the heterogeneous data, multi-modality information, and highly nonlinear mapping. To overcome these challenges, we first propose a dual-level supervised multiple instance learning module (DSA-MIL) to effectively combine the zone-level representations into patient-level representations. Then a novel modality alignment contrastive learning module (MA-CLR) is presented to combine the representations of the two modalities, LUS and clinical information, by matching the two spaces while keeping the discriminative features. To train the nonlinear mapping, a staged representation transfer (SRT) strategy is introduced to maximally leverage the semantic and discriminative information in the training data. We trained the model with LUS data from 233 patients and validated it with 80 patients. Our method effectively combines the two modalities, achieving an accuracy of 75.0% for four-level patient severity assessment and 87.5% for binary severe/non-severe identification. In addition, our method provides interpretation of the severity assessment by grading each lung zone (with an accuracy of 85.28%) and identifying the pathological patterns in each zone. Our method has great potential in real clinical practice for COVID-19 patients, especially pregnant women and children, for progress monitoring, prognosis stratification, and patient management.
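The zone-to-patient aggregation is the classic attention-based multiple-instance pooling pattern, sketched below in its generic form; this is a standard formulation, not the paper's dual-level DSA-MIL module.

```python
# Attention-based MIL pooling: learn a weight per lung zone, then take the
# weighted sum of zone representations as the patient-level representation.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, zones):                        # zones: (n_zones, d)
        w = torch.softmax(self.score(zones), dim=0)  # per-zone attention
        return (w * zones).sum(dim=0)                # patient-level vector

patient_vec = AttentionMILPool()(torch.randn(12, 128))  # 12 lung zones
print(patient_vec.shape)  # torch.Size([128])
```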


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Machine Learning , Male , Middle Aged , SARS-CoV-2 , Severity of Illness Index , Tomography, X-Ray Computed , Ultrasonography , Young Adult